A Survey of FPT Algorithm Design Techniques with an Emphasis on Recent Advances and Connections to Practical Computing
Abstract
The talk will survey the rich toolkit of FPT algorithm design methodologies that has developed over the last 20 years, including "older" techniques such as well-quasi-ordering, bounded treewidth, color-coding, reduction to a problem kernel, and bounded search trees (all of which have continued to deepen and advance), as well as some recently developed approaches such as win/wins, greedy localization, iterative compression, and crown decompositions.

Only in the last few years has it become clear that there are two distinct theoretical "games" being played in the effort to devise better and better FPT algorithms for various problems, that is, algorithms with a running time of O(f(k) · n^c), where c is a fixed constant, k is the parameter, and n is the total input size.

The first of these games is the obvious one of improving the parameter cost function f(k). For example, the first FPT algorithm for Vertex Cover had a running time of O(2^k n). After a series of efforts, the best currently known algorithm (due to Chandran and Grandoni, in a paper to be presented at IWPEC 2004) has a parameter cost function of f(k) = (1.2745)^k · k^4.

The second theoretical game being played in FPT algorithm design has to do with efficient kernelization. The naturality and universality of this game comes from the observation that a parameterized problem is FPT if and only if there is a polynomial-time preprocessing (also called data reduction or kernelization) algorithm that transforms the input (x, k) into (x′, k′), where k′ ≤ k and |x′| ≤ g(k) for some kernel-bounding function g(k), and where, of course, (x, k) is a yes-instance if and only if (x′, k′) is a yes-instance. In other words, modulo polynomial-time preprocessing, all of the difficulty, and even the size of the input that needs to be considered, is bounded by a function of the parameter. The natural theoretical game from this point of view is to improve the kernel-bounding function g(k). For example, after strenuous efforts, a kernel-bounding function of g(k) = 335k has recently been achieved (by Alber et al.) for the Planar Dominating Set problem.

Some of the new techniques to be surveyed pertain to the f(k) game, some address the kernelization game, and some are relevant to both of these goals in improving FPT algorithms. One of the striking things about the "positive" toolkit of FPT algorithm design is how unsettled the area seems to be, with ...
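As a concrete illustration of the two games on Vertex Cover, the following minimal sketch combines a Buss-style kernelization (the high-degree reduction rule, which yields the classical O(k^2)-edge kernel) with a simple O(2^k) bounded search tree. This is not code from the talk; the function names (kernelize, branch_vc, vertex_cover) and the edge-set graph representation are invented here for illustration.

```python
# Sketch: kernelization + bounded search tree for Vertex Cover.
# A graph is a set of frozenset edges {u, v}; names are illustrative.

def kernelize(edges, k):
    """Buss kernelization: returns (reduced_edges, k', forced_cover),
    or None if the instance is already a no-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # Rule: a vertex of degree > k must be in any cover of size <= k,
        # since otherwise all of its > k neighbors would have to be.
        high = [v for v, d in degree.items() if d > k]
        if high:
            v = high[0]
            forced.add(v)
            edges = {e for e in edges if v not in e}
            k -= 1
            changed = True
    if k < 0:
        return None
    # After the rule, every vertex has degree <= k, so a yes-instance
    # has at most k * k edges: the kernel-size bound g(k) = O(k^2).
    if len(edges) > k * k:
        return None
    return edges, k, forced


def branch_vc(edges, k):
    """Bounded search tree: branch on the two endpoints of any edge,
    giving a tree of depth <= k with O(2^k) leaves."""
    if not edges:
        return set()
    if k == 0:
        return None
    e = next(iter(edges))
    for v in e:  # one of the two endpoints must be in the cover
        rest = {f for f in edges if v not in f}
        sub = branch_vc(rest, k - 1)
        if sub is not None:
            return sub | {v}
    return None


def vertex_cover(edges, k):
    reduced = kernelize(edges, k)
    if reduced is None:
        return None
    edges2, k2, forced = reduced
    sub = branch_vc(edges2, k2)
    return None if sub is None else sub | forced


# Example: a 4-cycle has a vertex cover of size 2.
print(vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)], 2))
```

Running the kernelization first is exactly the "modulo polynomial-time preprocessing" point made above: the exponential branching only ever operates on an instance whose size is bounded by a function of the parameter alone.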